Explicit examples in ergodic optimization


Related articles

Stably ergodic approximation: two examples

It has been conjectured that the stably ergodic diffeomorphisms are open and dense in the space of volume-preserving, partially hyperbolic diffeomorphisms of a compact manifold. In this paper we deal with two recalcitrant examples: the standard map cross Anosov and the ergodic automorphisms of the four torus. In both cases we show that they may be approximated by stably ergodic diffeomorphisms whi...


Ergodic Optimization

The field is a relatively recently established subfield of ergodic theory, and has significant input from the two well-established areas of symbolic dynamics and Lagrangian dynamics. The large-scale picture of the field is that one is interested in optimizing potential functions over the (typically highly complex) class of invariant measures for a dynamical system. Tools that have been employed...


Ergodic Optimization

Let f be a real-valued function defined on the phase space of a dynamical system. Ergodic optimization is the study of those orbits, or invariant probability measures, whose ergodic f -average is as large as possible. In these notes we establish some basic aspects of the theory: equivalent definitions of the maximum ergodic average, existence and generic uniqueness of maximizing measures, and t...
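The definition above can be made concrete numerically. The following is a minimal illustrative sketch (not taken from the paper): for the doubling map T(x) = 2x mod 1 and f(x) = cos(2πx), we compare Birkhoff averages of f along two periodic orbits, whose invariant measures are candidates for the maximizing measure. Exact integer arithmetic is used because iterating the doubling map in floating point collapses every orbit to 0.

```python
import math

def doubling_orbit(p, q, n):
    """First n points of the orbit of the rational p/q under T(x) = 2x mod 1,
    computed with exact integer arithmetic to avoid floating-point collapse."""
    orbit = []
    for _ in range(n):
        orbit.append(p / q)
        p = (2 * p) % q
    return orbit

def birkhoff_average(f, orbit):
    """Ergodic f-average (1/n) * sum of f over the given orbit segment."""
    return sum(f(x) for x in orbit) / len(orbit)

f = lambda x: math.cos(2 * math.pi * x)

# Delta measure at the fixed point 0: average is f(0) = 1, the maximum of f,
# so this measure is maximizing for this particular potential.
avg_fixed = birkhoff_average(f, doubling_orbit(0, 1, 1))

# Period-2 orbit {1/3, 2/3}: average (cos(2*pi/3) + cos(4*pi/3))/2 = -1/2.
avg_period2 = birkhoff_average(f, doubling_orbit(1, 3, 2))

print(avg_fixed, avg_period2)  # approximately 1.0 and -0.5
```

For this potential the maximizing measure is supported on a fixed point; for other potentials it is typically supported on more complicated invariant sets, which is what makes explicit examples valuable.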


Ergodic Results in Subgradient Optimization

Subgradient methods are popular tools for nonsmooth, convex minimization, especially in the context of Lagrangean relaxation; their simplicity has been a main contribution to their success. As a consequence of the nonsmoothness, it is not straightforward to monitor the progress of a subgradient method in terms of the approximate fulfilment of optimality conditions, since the subgradients used i...


Ergodic Convergence in Subgradient Optimization

When nonsmooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients which verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of a subgradient method in terms of the approximate fulfilment of optimality conditions. Further, certain ...
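The phenomenon this abstract describes can be seen in a few lines. The sketch below is illustrative and not the paper's algorithm: a plain subgradient method on f(x) = |x| with diminishing steps oscillates around the minimizer and its raw subgradients never vanish, but the ergodic (running) averages of the iterates and of the subgradients do settle down, which is what makes them usable for monitoring optimality.

```python
def subgrad(x):
    """A subgradient of f(x) = |x|; any value in [-1, 1] is valid at 0."""
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

x = 5.0
avg_x, avg_g = 0.0, 0.0
for k in range(1, 10001):
    g = subgrad(x)             # raw subgradient: always +1 or -1 away from 0
    x -= g / k                 # diminishing step length 1/k
    avg_x += (x - avg_x) / k   # running (ergodic) average of the iterates
    avg_g += (g - avg_g) / k   # running (ergodic) average of the subgradients

# The raw g keeps jumping between +1 and -1 near the optimum, but both
# ergodic averages are close to 0, the optimal point and the zero subgradient.
print(abs(avg_x), abs(avg_g))
```

The raw subgradient sequence here has no convergent behaviour at all, yet its average approaches 0 ∈ ∂f(0); this is the simplest instance of the ergodic convergence the abstract refers to.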



Journal

Journal title: São Paulo Journal of Mathematical Sciences

Year: 2020

ISSN: 1982-6907,2316-9028

DOI: 10.1007/s40863-020-00188-y